Merging Modest with Complexity
This thesis enables a retrospective critical examination of aspects of my practice as an artist from 2005 to 2020. The research question addresses the implications of the multiple forms of inter-reliance enabled in the practice. This is achieved by opening a discursive space that retrospectively integrates and critically examines the role and function of inter-reliance as a structural methodology, and how it is implicated in the practice over this period.
This thesis uses the term inter-reliance to define a play of relations in which individual artworks, when viewed in isolation, exist only in partial illumination, as a form of penumbra. The artworks are inchoate as separate entities, only becoming activated or fully realised when engaged with collectively and interdiscursively, as a set of enabled relationships. In each of the chapters, inter-reliance is manifested as a set of specific enabled reciprocal relationships between artistic mechanisms and physical, perceptual, associative, sonic, contextual and cinematic space. Rather than making art for art's sake, or art that specifically engages with trends or tendencies within the art world, the thesis elucidates how the practice is relational and empathetic, facilitating an inter-reliance between artist and viewer and between artist and society. The practice engages with and reflects upon broader society, where articulations of ideological positions are subtly embedded.
Songs and the Soil
Published in conjunction with an exhibition. The exhibition engages with the subjects of landscape and music/sound, exploring each element from historical, social and culturally associative perspectives; landscape is recognised as a fluid term articulating physical space, idealised space and social space that reflects a convergence of physical processes and cultural meaning, while songs act as a response to, or archive of, personal, historical or socio-political instances. Several works engage the points where landscape and musical sound intersect. The exhibition integrates a broad range of media, positions and responses to these research subjects, including two film works, a six-hour soundtrack for a room, sonic sculptures, a series of sculptural interventions, paintings, analogue photography, screen prints, ceramics and flowers. In most instances, a number of these elements combine to form installations. The selected texts that feature in this publication do not relate to these artworks directly, but either explicitly or obliquely engage with the broader research subject of landscape and music/sound. This collaborative project integrates one existing text and six commissioned texts by Mark Garry, John Graham, Joanne Laws, Sharon Phelan and Suzanne Walsh. The publication also includes a transcription of a 1974 radio interview between Charles Amirkhanian and the musician Robbie Basho. This interview discusses the broad scope of Basho's music and the remarkable generosity and fluidity of music as a cultural form. In particular, the dialogue explores music's openness and its potential to continuously evolve and incorporate diverse influences, styles and forms. This collaborative relationship is echoed in the design and editorial process of the publication. Rather than passively cataloguing the exhibition, the selected images act as visual echoes of the artist's creative motivations.
Intended to complement the textual contributions, the images are an amalgam of private notebook studies, investigations, experiments, observations and a visual archive of completed works, functioning as a platform to extend the discourse on themes and topics embedded within the research. The visual hierarchy and typographic treatment take direction from the synthesis of topics articulated within the contributors' texts. This is made visible in the subtle layering of content that builds and recedes across the document to create a composition that considers research commonalities. The layout is also cognisant of the indirect interactions of topics that take place within the artist's work. Facilitated by the substrate, shapes and shadows from previous and subsequent spreads are subtly revealed at various junctures within the publication. The digital pattern represents cuneiform shapes of sound used to visually represent Debussy's 1905 composition Clair de Lune; this particular score was chosen for its complex and intriguing origin story. The torn paper that intersects the rugged landscape images exposes their surface quality, but also contemplates the role of sound in the formation of landscape. The symbols that puncture the cover substrate acknowledge forms and methods of communication and sound that covertly ebb in and out of the artist's work. These design interventions attempt to capture the explorative nature and the collision of ideas that emerged within the research process.
The assessment of benchmarks executed on bare-metal and using para-virtualization
A full assessment of para-virtualization is important because, without knowledge of the various overheads, users cannot judge whether using virtualization is a good idea. In this paper we are interested in assessing the overheads of running various benchmarks on bare-metal as well as on para-virtualization. The idea is to see what the overheads of para-virtualization are, as well as to look at the overheads of turning on monitoring and logging. The knowledge gained from assessing various benchmarks on these different systems will help a range of users understand the use of virtualization systems.
In this paper we assess the overheads of using Xen, VMware, KVM and Citrix (see Table 1). These virtualization systems are used extensively by cloud users. We use various Netlib benchmarks, which have been developed by the University of Tennessee at Knoxville (UTK) and Oak Ridge National Laboratory (ORNL).
In order to assess these virtualization systems, we run the benchmarks on bare-metal, then on the para-virtualization, and finally we turn on monitoring and logging. The latter is important because users are interested in the Service Level Agreements (SLAs) used by cloud providers, and logging is a means of assessing the services bought and used from commercial providers.
In this paper we assess the virtualization systems on three different machines: the Thamesblue supercomputer, the Hactar cluster and an IBM JS20 blade server (see Table 2), all of which are available at the University of Reading.
A functional virtualization system is multi-layered and is driven by its privileged components. Virtualization systems can host multiple guest operating systems, each of which runs in its own domain, and the system schedules virtual CPUs and memory within each virtual machine (VM) to make the best use of the available resources. The guest operating system then schedules each application accordingly.
Virtualization can be deployed as full virtualization or para-virtualization. Full virtualization provides a total abstraction of the underlying physical system and creates a new virtual system in which the guest operating systems can run. No modifications are needed in the guest OS or application; that is, the guest OS or application is not aware of the virtualized environment and runs normally. Para-virtualization requires modification of the guest operating systems that run on the virtual machines; these guest operating systems are aware that they are running on a virtual machine, and in return provide near-native performance. Both para-virtualization and full virtualization can be deployed across various virtualized systems.
Para-virtualization is an OS-assisted virtualization, in which some modifications are made to the guest operating system to enable better performance. In this kind of virtualization, the guest operating system is aware that it is running on virtualized hardware rather than on bare hardware. In para-virtualization, the device drivers in the guest operating system coordinate with the device drivers of the host operating system, which reduces the performance overheads.
The use of para-virtualization [0] is intended to avoid the bottleneck associated with the slow hardware interrupts that exist when full virtualization is employed. It has been shown [0] that para-virtualization does not impose a significant performance overhead in high performance computing, and this in turn has implications for the use of cloud computing for hosting HPC applications.
The "apparent" improvement in virtualization has led us to formulate the hypothesis that certain classes of HPC applications should be able to execute in a cloud environment with minimal performance degradation. In order to support this hypothesis, it is first necessary to define exactly what is meant by a "class" of application, and secondly to observe application performance both within a virtual machine and when executing on bare hardware.
A further potential complication is associated with the need for cloud service providers to support Service Level Agreements (SLAs), so that system utilisation can be audited.
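The auditing role that logging plays here can be sketched as a check of measured service levels against an SLA clause. The log format, sampling scheme and threshold below are all hypothetical, not taken from any real provider:

```python
# Hypothetical provider log: one sampled interval per line,
# "timestamp status response_ms". Real provider logs will differ.
LOG = """\
2013-01-01T00:00:00 OK 42
2013-01-01T01:00:00 OK 61
2013-01-01T02:00:00 DOWN 0
2013-01-01T03:00:00 OK 38
"""

def availability(log_text):
    """Fraction of sampled intervals in which the service was up."""
    records = [line.split() for line in log_text.splitlines() if line.strip()]
    up = sum(1 for _, status, _ in records if status == "OK")
    return up / len(records)

sla_target = 0.99  # e.g. a "99% availability" clause in the SLA
measured = availability(LOG)
print(f"measured availability: {measured:.2%}, SLA met: {measured >= sla_target}")
```

This is the sense in which logging lets customers assess the services bought from commercial providers: the provider's own records become auditable evidence against the agreed targets.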
Effect of Soaking Pre-treatment on the Fermentation of Kalamata Olives
Traditional methods of naturally black olive production employ a series of static washings prior to fermentation. This work investigates the static washings and the effects they have on the subsequent spontaneous fermentation of Kalamata olives. Significant quantities of organic carbonaceous material, including phenolic compounds, were removed during the static washings. The rate of removal peaked after four static washings, and then declined. Bacteria (including lactic acid bacteria) and yeast were found to be present in high numbers throughout the static washings. An increase in the number of static washings resulted in the removal of inhibitory phenolic compounds. This led to a reduction in the lag phase and an increase in the specific growth rate for both the yeast and lactic acid bacteria during the subsequent spontaneous fermentations. However, an increased incidence of spoilage moulds was observed in the fermentations when the olives underwent thirteen static washings.
Policy Coordination in an Ecology of Water Management Games
Policy outcomes in all but the simplest policy systems emerge from a complex ecology of games featuring multiple actors, policy institutions, and issues, not just single policies operating in isolation. This paper updates Long's (1958) ecology of games framework with Scharpf's (1997) actor-centered institutionalism to analyze the coordinating roles of actors and institutions in the context of the ecology of water management games in the San Francisco Bay. Actors participating in multiple institutions are analyzed using exponential random graph models for bipartite networks, representing different assumptions about policy behavior, including geographic constraints. We find that policy coordination is facilitated mostly by federal and state agencies and by collaborative institutions that span geographic boundaries. Network configurations associated with closure show the most significant departures from the predicted model values, consistent with the Berardo and Scholz (2010) risk hypothesis that closure is important for solving cooperation problems.
A training-model scale's validity and reliability coefficients: expert evaluation in Indonesian professional psychology programs
Very little information has been available on training models in professional psychology programs in Indonesia, despite the Indonesian National Accreditation Body recommending that scientist-practitioner models be applied in the education of psychologists. By contrast, research abounds on such training models in Western countries. This discrepancy raises the importance of developing a measurement tool appropriate for assessing training models in Indonesian professional psychology programs. This article describes the process of testing the validity and reliability of such a training model measuring tool in the Indonesian context. The authors used the expert evaluation method and the Aiken formula to calculate a coefficient of content validity and the items' internal consistency reliability. This process produced a training model scale comprising 77 items with satisfactory validity and reliability indexes for measuring Indonesian professional psychology program training models.
Testing Policy Theory with Statistical Models of Networks
Abstract
This paper presents a conceptual framework for clarifying the network hypotheses embedded in policy theories and how they relate to macro-level political outcomes and micro-level political behavior. We then describe the role of statistical models of networks for testing these hypotheses, including the problem of operationalizing theoretical concepts with the parameters of statistical models. Examples from existing policy research are provided and potential extensions are discussed. This paper is forthcoming as the introduction to a special issue of the Policy Studies Journal on statistical models of policy networks
Reconstruction of the Genesis Entry
An overview of the reconstruction analyses performed for the Genesis capsule entry is described. The results indicate that the actual entry, prior to the drogue deployment failure, was very close to the pre-entry predictions. The capsule landed 8.3 km south of the desired target at the Utah Test and Training Range. Analysis of infrared video footage (obtained from the tracking stations) during the descent estimated the onset of the capsule tumble at Mach 0.9. Frequency analysis of the infrared video data indicates that the aerodynamics generated for the Genesis capsule reasonably predicted the drag and static stability. Observations of the heatshield support the pre-entry simulation estimates of small hypersonic angles of attack, since there is very little, if any, charring of the shoulder region or the aftbody. Through this investigation, an overall assertion can be made that all the data gathered from the Genesis entry is consistent with flight performance that was close to the nominal pre-entry prediction. Consequently, the design principles and methodologies utilized for the flight dynamics, aerodynamics, and aerothermodynamics analyses have been corroborated.